
    Smooth Solutions and Discrete Imaginary Mass of the Klein-Gordon Equation in the de Sitter Background

    Using methods in the theory of semisimple Lie algebras, we can obtain all smooth solutions of the Klein-Gordon equation on the 4-dimensional de Sitter spacetime (dS^4). The mass of a Klein-Gordon scalar on dS^4 is related to an eigenvalue of the Casimir operator of so(1,4); it is therefore discrete, or quantized. Furthermore, the mass m of a Klein-Gordon scalar on dS^4 is imaginary: m^2 is proportional to -N(N+3), with N >= 0 an integer. Comment: 23 pages, 4 figures
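    For concreteness, the quantization claim can be written out as below. This is only a restatement of the abstract; the explicit de Sitter radius $\ell$ in the proportionality constant is an assumed normalization, since the abstract states only that $m^2 \propto -N(N+3)$.

```latex
% Klein-Gordon equation on dS^4 with the mass spectrum stated in the
% abstract; \ell is an assumed de Sitter radius fixing the units.
\Box\,\phi = m^2\,\phi \quad \text{on } \mathrm{dS}^4,
\qquad
m^2 = -\frac{N(N+3)}{\ell^2}, \qquad N = 0, 1, 2, \dots
```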

    Statistical inference for semiparametric varying-coefficient partially linear models with error-prone linear covariates

    We study semiparametric varying-coefficient partially linear models when some linear covariates are not observed, but ancillary variables are available. Semiparametric profile least-squares based estimation procedures are developed for the parametric and nonparametric components after we calibrate the error-prone covariates. Asymptotic properties of the proposed estimators are established. We also propose profile least-squares based ratio and Wald tests to identify significant parametric and nonparametric components. To improve the accuracy of the proposed tests for small or moderate sample sizes, a wild bootstrap version is also proposed to calculate the critical values. Intensive simulation experiments are conducted to illustrate the proposed approaches. Comment: Published in at http://dx.doi.org/10.1214/07-AOS561 the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
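    The wild bootstrap step mentioned above can be illustrated with a minimal, self-contained sketch: a Wald-type statistic in a plain linear model is calibrated by perturbing null-model residuals with Mammen's two-point weights. This is only a generic illustration of the bootstrap idea, not the paper's semiparametric profile least-squares procedure; the data, statistic, and weight choice below are assumptions made for the toy example.

```python
# Generic wild-bootstrap calibration of a Wald-type test in a linear model.
# Illustrative only; NOT the paper's semiparametric procedure.
import numpy as np

rng = np.random.default_rng(0)
n, B = 200, 999

# Toy data: y depends on column 1 but not column 2; errors are heteroscedastic.
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = 1.0 + 0.5 * X[:, 1] + (0.5 + 0.5 * np.abs(X[:, 1])) * rng.normal(size=n)

def wald_stat(X, y, j=2):
    """Squared t-type statistic for H0: beta_j = 0, with a sandwich variance."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    XtX_inv = np.linalg.inv(X.T @ X)
    V = XtX_inv @ (X.T * resid**2) @ X @ XtX_inv   # White covariance estimate
    return beta[j]**2 / V[j, j]

T_obs = wald_stat(X, y)

# Fit the null model (drop column j) and keep its fitted values and residuals.
X0 = X[:, :2]
beta0, *_ = np.linalg.lstsq(X0, y, rcond=None)
fit0, resid0 = X0 @ beta0, y - X0 @ beta0

# Wild bootstrap: perturb null residuals with Mammen's two-point weights
# (mean 0, variance 1), refit, and recompute the statistic each time.
a, b = (1 - np.sqrt(5)) / 2, (1 + np.sqrt(5)) / 2
p = (np.sqrt(5) + 1) / (2 * np.sqrt(5))
T_boot = np.empty(B)
for i in range(B):
    v = np.where(rng.random(n) < p, a, b)
    y_star = fit0 + resid0 * v
    T_boot[i] = wald_stat(X, y_star)

crit = np.quantile(T_boot, 0.95)        # bootstrap critical value at level 0.05
p_value = np.mean(T_boot >= T_obs)      # bootstrap p-value
print(f"observed {T_obs:.3f}, 5% critical value {crit:.3f}, p = {p_value:.3f}")
```

    Rademacher weights (+1 or -1 with equal probability) are a common alternative to the Mammen weights used above.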

    A Generic Path Algorithm for Regularized Statistical Estimation

    Regularization is widely used in statistics and machine learning to prevent overfitting and to steer solutions toward prior information. In general, a regularized estimation problem minimizes the sum of a loss function and a penalty term. The penalty term is usually weighted by a tuning parameter and encourages certain constraints on the parameters to be estimated. Particular choices of constraints lead to the popular lasso, fused lasso, and other generalized $l_1$ penalized regression methods. Although there has been a lot of research in this area, developing efficient optimization methods for many nonseparable penalties remains a challenge. In this article we propose an exact path solver based on ordinary differential equations (EPSODE) that works for any convex loss function and can deal with generalized $l_1$ penalties as well as more complicated regularization such as inequality constraints encountered in shape-restricted regressions and nonparametric density estimation. In the path-following process, the solution path hits, exits, and slides along the various constraints and vividly illustrates the trade-offs between goodness of fit and model parsimony. In practice, EPSODE can be coupled with AIC, BIC, $C_p$ or cross-validation to select an optimal tuning parameter. Our applications to generalized $l_1$ regularized generalized linear models, shape-restricted regressions, Gaussian graphical models, and nonparametric density estimation showcase the potential of the EPSODE algorithm. Comment: 28 pages, 5 figures
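    To make the ODE view of path following concrete, the sketch below traces the ridge path $\beta(\rho)$ of $\tfrac12\|y-X\beta\|^2+\tfrac{\rho}{2}\|\beta\|^2$ by integrating $\beta'(\rho) = -(X^\top X+\rho I)^{-1}\beta(\rho)$, which follows from differentiating the stationarity condition $(X^\top X+\rho I)\beta = X^\top y$ in $\rho$. Ridge is chosen only because its path is smooth; this is a stand-in illustration of the ODE idea, not the EPSODE algorithm, which additionally handles generalized $l_1$ penalties and the constraint events along the path.

```python
# Tracing a regularization path by integrating an ODE in the tuning parameter.
# Ridge is used because its path is smooth; a sketch of the ODE-path idea only.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
n, p = 100, 5
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0, 0.0, 0.0, 0.5])
y = X @ beta_true + rng.normal(size=n)

XtX, Xty = X.T @ X, X.T @ y

def dbeta_drho(rho, beta):
    # Differentiating (X'X + rho I) beta = X'y in rho gives
    # beta'(rho) = -(X'X + rho I)^{-1} beta(rho).
    return -np.linalg.solve(XtX + rho * np.eye(p), beta)

rho0, rho1 = 1e-3, 1e3
beta_start = np.linalg.solve(XtX + rho0 * np.eye(p), Xty)   # start of the path

sol = solve_ivp(dbeta_drho, (rho0, rho1), beta_start,
                dense_output=True, rtol=1e-8)

# Sanity check against the closed-form ridge solution at rho = 10.
rho = 10.0
path_value = sol.sol(rho)
exact = np.linalg.solve(XtX + rho * np.eye(p), Xty)
print(np.allclose(path_value, exact, atol=1e-4))
```

    Along such a path, a criterion like BIC can then be evaluated on a grid of $\rho$ values to pick the tuning parameter, which is the kind of coupling the abstract mentions.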

    Multi-View Active Learning in the Non-Realizable Case

    The sample complexity of active learning under the realizability assumption has been well studied. The realizability assumption, however, rarely holds in practice. In this paper, we theoretically characterize the sample complexity of active learning in the non-realizable case under the multi-view setting. We prove that, with unbounded Tsybakov noise, the sample complexity of multi-view active learning can be $\widetilde{O}(\log\frac{1}{\epsilon})$, in contrast to the single-view setting, where a polynomial improvement is the best possible. We also prove that in the general multi-view setting the sample complexity of active learning with unbounded Tsybakov noise is $\widetilde{O}(\frac{1}{\epsilon})$, where the order of $1/\epsilon$ is independent of the parameter in the Tsybakov noise, in contrast to previous polynomial bounds in which the order of $1/\epsilon$ depends on that parameter. Comment: 22 pages, 1 figure
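    A toy disagreement-based loop makes the multi-view querying setting concrete: one classifier per view, with labels requested only for pool points on which the two views disagree. The data, models, and query rule below are illustrative assumptions; the paper itself is about sample-complexity bounds rather than a particular implementation.

```python
# Toy two-view active learner: query labels only where the two views disagree.
# Illustrative only; not the algorithm analyzed in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000
Z = rng.normal(size=(n, 2))                   # latent signal
view1 = Z + 0.3 * rng.normal(size=(n, 2))     # two noisy views of the signal
view2 = Z + 0.3 * rng.normal(size=(n, 2))
y = (Z[:, 0] + Z[:, 1] > 0).astype(int)       # labels depend on the latent signal

labeled = set(range(20))                      # small labeled seed set
pool = set(range(n)) - labeled
clf1, clf2 = LogisticRegression(), LogisticRegression()

for _ in range(10):                           # active-learning rounds
    idx = sorted(labeled)
    clf1.fit(view1[idx], y[idx])
    clf2.fit(view2[idx], y[idx])
    pool_idx = np.array(sorted(pool))
    disagree = pool_idx[clf1.predict(view1[pool_idx])
                        != clf2.predict(view2[pool_idx])]
    if disagree.size == 0:
        break
    queried = rng.choice(disagree, size=min(20, disagree.size), replace=False)
    labeled.update(int(i) for i in queried)   # "ask the oracle" for these labels
    pool.difference_update(int(i) for i in queried)

# Accuracy on the full dataset (training points included), just for the demo.
acc = np.mean(clf1.predict(view1) == y)
print(f"labels queried: {len(labeled)}, view-1 accuracy: {acc:.3f}")
```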